Authors: Eduarda Centeno & Fernando Santos
With this notebook, we would like to facilitate the computation of different metrics used in network and topological analysis in neuroscience. Our goal is to cover both standard graph-theory metrics and a few metrics from topological and geometric data analysis.
We will not include any preprocessing steps for the imaging data. The resting-state fMRI (rsfMRI) matrices used here (i.e., based on correlation values of time series) were obtained from the UCLA multimodal connectivity database (1000_Functional_Connectomes dataset, references [1-2]). Nevertheless, the scripts provided here can be adapted to networks based on other imaging modalities.
# Basic data manipulation and visualisation libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import glob
# Network Libraries
import networkx as nx
from nxviz import CircosPlot
import community
# Libraries used for Topological Data Analysis
import gudhi
# Magic command to change plotting backend
#%matplotlib qt
# Magic command to load watermark
%load_ext watermark
# Possibility to stop warnings
import warnings
warnings.filterwarnings('ignore')
# Print versions
%watermark --author "Eduarda & Fernando" --date --time --python --machine --iversion --watermark --packages jupyterlab,notebook
Author: Eduarda & Fernando

Python implementation: CPython
Python version : 3.7.8
IPython version : 7.18.1

jupyterlab: 1.2.0
notebook  : 6.1.4

Compiler    : MSC v.1916 64 bit (AMD64)
OS          : Windows
Release     : 10
Machine     : AMD64
Processor   : Intel64 Family 6 Model 142 Stepping 10, GenuineIntel
CPU cores   : 8
Architecture: 64bit

community : 0.13
matplotlib: 3.3.2
networkx  : 2.4
seaborn   : 0.11.0
numpy     : 1.18.5
pandas    : 1.1.3
gudhi     : 3.3.0

Watermark: 2.1.0
matrix = np.genfromtxt('./1000_Functional_Connectomes/Connectivity matrices/AveragedMatrix.txt')
The idea here is to get an average matrix from all matrices available. For that, different methods can be used. We'll show two common ones: one with pandas, another with Numpy.
# Importing all matrices to generate averaged data with Numpy or Pandas
matrices = [np.genfromtxt(file) for file in glob.glob('./1000_Functional_Connectomes/Connectivity matrices/*_matrix_file.txt')]
matricesP = [pd.read_csv(file, header = None, delim_whitespace=True) for file in glob.glob('./1000_Functional_Connectomes/Connectivity matrices/*_matrix_file.txt')]
# Averaging matrices with Numpy
MatAv = np.zeros(shape=matrices[0].shape)
for matrix in matrices:
    MatAv += matrix
matrix = MatAv / len(matrices)
# Averaging matrices with Pandas
Pdmatrix = pd.concat(matricesP).groupby(level=0).mean()
# Obtaining name of areas according to matching file
lineList = [line.rstrip('\n') for line in open('./1000_Functional_Connectomes/Region Names/Baltimore_5560_region_names_abbrev_file.txt')]
# Obtaining a random list of numbers to simulate subnetworks -- THESE NUMBERS DO NOT CORRESPOND TO ANY REAL CLASSIFICATION
sublist = [line.rstrip('\n') for line in open('./subnet_ordernames.txt')]
# Obtaining a random list of colors that will match the random subnetwork classification for further graphs -- THESE COLORNAMES DO NOT CORRESPOND TO ANY REAL CLASSIFICATION
colorlist = [line.rstrip('\n') for line in open('./subnet_order_colors.txt')]
# Obtaining a random list of colors (in numbers) that will match the random subnetwork classification for further graphs -- THESE NUMBERS DO NOT CORRESPOND TO ANY REAL CLASSIFICATION
colornumbs = np.genfromtxt('./subnet_colors_number.txt')
After importing the matrix, we can start with a standard representation - heatmaps!
# Creating a DataFrame which will have the rows and column names according to the brain areas
matrixdiagNaN = matrix.copy()
np.fill_diagonal(matrixdiagNaN,np.nan)
Pdmatrix = pd.DataFrame(matrixdiagNaN)
Pdmatrix.columns = lineList
Pdmatrix.index = lineList
Pdmatrix = Pdmatrix.sort_index(axis=0).sort_index(axis=1)
# This mask variable gives you the possibility to plot only half of the correlation matrix.
mask = np.zeros_like(Pdmatrix.values, dtype=bool)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize = (20, 20))
_ = sns.heatmap(Pdmatrix, cmap='coolwarm', cbar=True, square=False, mask=None) # To apply the mask, change to mask=mask
When working with network analysis of brain data, a couple of crucial decisions have to be made. For example, one can decide to use all network connections, including low-weight links (sometimes considered spurious), or establish an arbitrary threshold and keep only links above a specific correlation value. Thresholding can be done in different ways: based solely on the correlation value (as done here), or based on network density (i.e., keeping only the 20% strongest correlations, for instance). If using an arbitrary threshold, one must also decide whether the resulting matrix will be weighted (i.e., keeping the edges' weights) or unweighted (binarised).
Another point of discussion is how to deal with negative weights in weighted networks. A common practice is to absolutise the matrix and preserve the topology. This approach also facilitates the computation of several graph-theory metrics that are not adapted for negative weights. Here, we have chosen to proceed with the absolute values of all connections in the correlation matrix.
We strongly suggest Reference [3] for a deeper understanding of all these decisions.
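As an illustration of the density-based alternative mentioned above, the sketch below (our own, not part of the original notebook; `threshold_by_density` is a hypothetical helper) keeps only a target fraction of the strongest absolute correlations:

```python
import numpy as np

def threshold_by_density(mat, density=0.20):
    """Keep only the `density` fraction of strongest absolute correlations.

    A minimal sketch: zero out every off-diagonal entry below the percentile
    cutoff implied by the target density (diagonal is ignored).
    """
    A = np.abs(mat.copy())
    np.fill_diagonal(A, 0)
    # Upper-triangle weights (undirected matrix, so each edge counted once)
    triu = A[np.triu_indices_from(A, k=1)]
    cutoff = np.quantile(triu, 1 - density)
    A[A < cutoff] = 0
    return A

example = np.array([[1.0, 0.9, 0.1],
                    [0.9, 1.0, 0.3],
                    [0.1, 0.3, 1.0]])
sparse = threshold_by_density(example, density=1/3)  # keeps only the 0.9 edge
```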
Figure 1 provides a schematic summary of the types of networks:
Figure 1. Types of networks. (A) A binary directed graph. (B) A binary, undirected graph. In binary graphs, the presence of a connection is signified by a 1 and its absence by a 0. (C) A representation of graph F as a network of brain areas. (D) A weighted, directed graph. (F) A weighted, undirected graph. In a weighted graph, the strength of the connections is represented by a number in [0,1]. (G) A connectivity matrix of C and F. Source: Part of the image was obtained from Smart Servier Medical Art.
# Absolutise for further use
matrix = abs(matrix)
matrixdiagNaN = abs(matrixdiagNaN)
When working with fMRI brain network data, it is useful to generate some plots (e.g., the heatmaps above for matrix visualisation, and distribution plots of edge weights) to facilitate data comprehension and flag potential artefacts. In brain networks, we expect mostly weak edges and a smaller proportion of strong ones. When plotted as a probability density of log10, we expect the weight distribution to have a Gaussian-like form [3].
# Weight distribution plot
bins = np.arange(np.sqrt(len(np.concatenate(matrix))))
bins = (bins - np.min(bins))/np.ptp(bins)
fig, axes = plt.subplots(1,2, figsize=(15,5))
# Distribution of raw weights
rawdist = sns.distplot(matrixdiagNaN.flatten(), bins=bins, kde=False, ax=axes[0], norm_hist=True)
rawdist.set(xlabel='Correlation Values', ylabel = 'Density Frequency')
# Probability density of log10
log10dist = sns.distplot(np.log10(matrixdiagNaN).flatten(), kde=False, ax=axes[1], norm_hist=True)
log10dist.set(xlabel='log(weights)')
[Text(0.5, 0, 'log(weights)')]
The metrics that we will cover here are:
Each of these metrics has its requisites for computation. For example, it is not possible to accurately compute closeness centrality and the average shortest path for fragmented networks (i.e., there are subsets of disconnected nodes). Therefore, keep that in mind when thinking about thresholding a matrix.
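Since fragmentation silently invalidates path-based metrics, a quick connectivity check before thresholding-dependent computations can save trouble. A minimal sketch (not from the original notebook):

```python
import networkx as nx

# Fragmented graphs make closeness centrality and the average
# shortest path ill-defined: two disconnected components here.
Gf = nx.Graph([(0, 1), (2, 3)])
connected = nx.is_connected(Gf)                 # False for this graph
n_components = nx.number_connected_components(Gf)  # 2 components
```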
Figure 2 provides a summary of some graph-theoretical metrics:
Figure 2. Graph theoretical metrics. (A) A representation of a graph indicating centralities. (B) Representation of modularity and clustering coefficient. (C) The shortest path between vertices A and B. (D) The minimum spanning tree.
# Creating a graph
G = nx.from_numpy_matrix(matrix)
# Removing self-loops
G.remove_edges_from(list(nx.selfloop_edges(G)))
Definition: A graph's density is the ratio between the number of edges and the total number of possible edges.
Clearly, in all-to-all connected graphs, the density will be maximal (or 1), whereas for a graph without edges it will be 0. Here, just for the sake of demonstration, we will compute the density of different states of the network to show how density changes.
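As a quick sanity check of the definition, the density of an undirected graph with n nodes and m edges is 2m / (n(n-1)); the toy example below (ours, not the notebook's) verifies this against NetworkX:

```python
import networkx as nx

# Undirected graph with n = 3 nodes and m = 2 edges
Gd = nx.Graph([(0, 1), (1, 2)])
# Density by hand: 2m / (n * (n - 1)) = 4/6
manual = 2 * Gd.number_of_edges() / (len(Gd) * (len(Gd) - 1))
```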
#print(nx.density.__doc__)
# Create graphs for comparison
matrix2 = matrix.copy()
matrix3 = matrix.copy()
# Create sparser graphs
matrix2[matrix2<=0.50] = 0
matrix3[matrix3<=0.75] = 0
st50G = nx.from_numpy_matrix(matrix2)
st25G = nx.from_numpy_matrix(matrix3)
st50G.remove_edges_from(list(nx.selfloop_edges(st50G)))
st25G.remove_edges_from(list(nx.selfloop_edges(st25G)))
# Compute densities
alltoall = nx.density(G)
st50 = nx.density(st50G)
st25 = nx.density(st25G)
names = ['All-To-All', '> 0.5', '> 0.75']
values = [alltoall, st50, st25]
dict(zip(names, values))
{'All-To-All': 1.0,
'> 0.5': 0.013161273754494093,
'> 0.75': 0.00012840267077555214}
Definition: In undirected weighted networks, the node strength can be computed as the sum of the weights of the edges attached to each node. It is a primary metric to identify how important a node is in the graph. It is possible to normalise (multiply the strength by 1/(N-1)) to make the output value more intuitive. (Reference [3], p. 119)
It is also common to compute the mean degree of the network, which is the sum of node degrees divided by the total number of nodes.
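A toy example (ours, not the notebook's data) makes the definition concrete: strength is the weighted degree, and the normalisation divides by N-1:

```python
import networkx as nx

# Three-node weighted graph to illustrate node strength
Gs = nx.Graph()
Gs.add_weighted_edges_from([(0, 1, 0.8), (0, 2, 0.4), (1, 2, 0.2)])
strength = dict(Gs.degree(weight='weight'))  # node 0: 0.8 + 0.4 = 1.2
# Normalised strength: divide by N - 1 = 2
norm = {n: s / (len(Gs) - 1) for n, s in strength.items()}
```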
# Computation of nodal degree/strength
#print(nx.degree.__doc__)
strength = G.degree(weight='weight')
strengths = {node: val for (node, val) in strength}
nx.set_node_attributes(G, dict(strength), 'strength') # Add as nodal attribute
# Normalised node strength values 1/(N-1)
normstrengths = {node: val * 1/(len(G.nodes)-1) for (node, val) in strength}
nx.set_node_attributes(G, normstrengths, 'strengthnorm') # Add as nodal attribute
# Computing the mean degree of the network
normstrengthlist = np.array([val * 1/(len(G.nodes)-1) for (node, val) in strength])
mean_degree = np.sum(normstrengthlist)/len(G.nodes)
print(mean_degree)
0.11425288701460302
Centralities are frequently used to understand which nodes occupy critical positions in the network.
Remember:
Degree Centrality: The degree centrality for a node v is the fraction of nodes it is connected to. This metric is the same as node degree, so it will not be computed again. (NetworkX Documentation [4])
Closeness Centrality: In weighted graphs, the closeness centrality of a node v is the reciprocal of the sum of the shortest weighted path distances from v to the N-1 other nodes. An important consideration is that a node with many low-weight edges can have the same centrality as a node with only a few high-weight edges. (NetworkX Documentation, Reference [3] - Chapter 5)
Betweenness Centrality: Betweenness centrality of a node v is the sum of the fraction of all-pairs shortest paths that pass through v. (NetworkX Documentation [4])
Eigenvector Centrality: Eigenvector centrality computes the centrality for a node based on its neighbours' centrality. It takes into account not only quantity (e.g., degree centrality) but also quality. If a node is linked to many nodes that also display a high degree, that node will have high eigenvector centrality. (NetworkX Documentation)
Page Rank: PageRank computes a ranking of the nodes in the graph G based on the incoming links' structure. (NetworkX Documentation [4])
# Closeness centrality
#print(nx.closeness_centrality.__doc__)
# The function accepts an argument 'distance' that, in correlation-based networks, must be seen as the inverse ...
# of the weight value. Thus, a high correlation value (e.g., 0.8) means a short distance (i.e., 1/0.8 = 1.25).
G_distance_dict = {(e1, e2): 1 / abs(weight) for e1, e2, weight in G.edges(data='weight')}
# Then add them as attributes to the graph edges
nx.set_edge_attributes(G, G_distance_dict, 'distance')
# Computation of Closeness Centrality
closeness = nx.closeness_centrality(G, distance='distance')
# Now we add the closeness centrality value as an attribute to the nodes
nx.set_node_attributes(G, closeness, 'closecent')
# Visualise values directly
#print(closeness)
# Closeness Centrality Histogram
sns.distplot(list(closeness.values()), kde=False, norm_hist=False)
plt.xlabel('Centrality Values')
plt.ylabel('Counts')
Text(0, 0.5, 'Counts')
# Betweenness centrality:
#print(nx.betweenness_centrality.__doc__)
betweenness = nx.betweenness_centrality(G, weight='distance', normalized=True)
# Now we add it as an attribute to the nodes
#nx.set_node_attributes(G, betweenness, 'bc')
# Visualise values directly
#print(betweenness)
# Betweenness centrality Histogram
sns.distplot(list(betweenness.values()), kde=False, norm_hist=False)
plt.xlabel('Centrality Values')
plt.ylabel('Counts')
Text(0, 0.5, 'Counts')
# Eigenvector centrality
#print(nx.eigenvector_centrality.__doc__)
eigen = nx.eigenvector_centrality(G, weight='weight')
# Now we add it as an attribute to the nodes
nx.set_node_attributes(G, eigen, 'eigen')
# Visualise values directly
#print(eigen)
# Eigenvector centrality Histogram
sns.distplot(list(eigen.values()), kde=False, norm_hist=False)
plt.xlabel('Centrality Values')
plt.ylabel('Counts')
Text(0, 0.5, 'Counts')
# Page Rank
#print(nx.pagerank.__doc__)
pagerank = nx.pagerank(G, weight='weight')
# Add as attribute to nodes
nx.set_node_attributes(G, pagerank, 'pg')
# Visualise values directly
#print(pagerank)
# Page Rank Histogram
sns.distplot(list(pagerank.values()), kde=False, norm_hist=False)
plt.xlabel('Pagerank Values')
plt.ylabel('Counts')
Text(0, 0.5, 'Counts')
Shortest Path: The shortest path (or distance) between two nodes in a graph. In a weighted graph, it is obtained by minimising the sum of edge weights along the path.
Average Path Length: A concept in network topology defined as the average number of steps along the shortest paths for all possible pairs of network nodes. It is a measure of the efficiency of information or mass transport on a network.
# Path Length
#print(nx.shortest_path_length.__doc__)
# This is a versatile version of the ones below in which one can define or not source and target. Remove the hashtag to use this version.
#list(nx.shortest_path_length(G, weight='distance'))
# This one can also be used if defining source and target:
#print(nx.dijkstra_path_length.__doc__)
nx.dijkstra_path_length(G, source=20, target=25, weight='distance')
# Whereas this one is for all pairs. Remove the hashtag to use this version.
#print(nx.all_pairs_dijkstra_path_length.__doc__)
#list(nx.all_pairs_dijkstra_path_length(G, weight='distance'))
7.81691095077324
# Average Path Length or Characteristic Path Length
#print(nx.average_shortest_path_length.__doc__)
nx.average_shortest_path_length(G, weight='distance')
6.992954715060494
Modularity: Modularity compares the number of edges inside a cluster with the expected number of edges that one would find if the network was connected randomly but with the same number of nodes and node degrees. It is used to identify strongly connected subsets, i.e., modules or 'communities'. Here, we will use the Louvain algorithm, as recommended in Reference [3].
Assortativity: Assortativity measures the similarity of connections in the graph with respect to the node degree. (NetworkX)
Clustering coefficient: a measure of the tendency for any two neighbours of a node to be directly connected. According to NetworkX's documentation, the clustering coefficient of a weighted graph is defined using the geometric average of the subgraph edge weights. (NetworkX, Reference [4])
Minimum Spanning Tree: the backbone of a network, i.e., the minimum set of edges necessary to ensure that a path exists between every pair of nodes. Several algorithms can build the spanning tree; NetworkX uses Kruskal's algorithm by default. Briefly, this algorithm ranks edges by distance, adds the shortest ones first, and, edge by edge, checks whether a cycle would be formed. An edge that would create a cycle is not added.
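Kruskal's behaviour can be seen on a toy triangle (our own example, not the notebook's data): edges are added in order of increasing distance, and the heaviest edge, which would close a cycle, is skipped:

```python
import networkx as nx

# Triangle with increasing edge distances
Gt = nx.Graph()
Gt.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)],
                           weight='distance')
# Kruskal adds (0, 1) then (1, 2); (0, 2) would close a cycle, so it is dropped
T = nx.minimum_spanning_tree(Gt, weight='distance')
```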
# Modularity
#print(community.best_partition.__doc__)
#from community import best_partition
part = community.best_partition(G, weight='weight')
# Visualise values directly
#print(part)
# Check the number of communities
set(part.values())
{0, 1, 2, 3, 4}
# Assortativity
#print(nx.degree_pearson_correlation_coefficient.__doc__)
nx.degree_pearson_correlation_coefficient(G, weight='weight')
-0.005681818181818175
# Clustering Coefficient
#print(nx.clustering.__doc__)
clustering = nx.clustering(G, weight='weight')
# Add as attribute to nodes
nx.set_node_attributes(G, clustering, 'cc')
# Visualise values directly
#print(clustering)
# Clustering Coefficient Histogram
sns.distplot(list(clustering.values()), kde=False, norm_hist=False)
plt.xlabel('Clustering Coefficient Values')
plt.ylabel('Counts')
Text(0, 0.5, 'Counts')
# Average Clustering Coefficient
#print(nx.clustering.__doc__)
nx.average_clustering(G, weight='weight')
0.10869657391580578
# Minimum Spanning Tree
GMST = nx.minimum_spanning_tree(G, weight='distance')
In this section, we'll provide a few ideas on how to visualise and present your network.
First, let's get some important attributes about brain area names and subnetworks. These will be used later for graphical visualisation!
# Function to transform our list of brain areas into a dictionary
def Convert(lst):
    res_dct = {i: lst[i] for i in range(0, len(lst))}
    return res_dct
# Add brain areas as attribute of nodes
nx.set_node_attributes(G, Convert(lineList), 'area')
# Add node colors
nx.set_node_attributes(G, Convert(colorlist), 'color')
# Add subnetwork attribute
nx.set_node_attributes(G, Convert(sublist), 'subnet')
# Add node color numbers
nx.set_node_attributes(G, Convert(colornumbs), 'colornumb')
Now we will create a standard spring-layout network plot; it could also be made circular by changing to draw_circular.
We raise the edge widths to the power of 2 so that weak weights get smaller widths.
# Standard Network graph with nodes in proportion to Graph degrees
plt.figure(figsize=(30,30))
edgewidth = [ d['weight'] for (u,v,d) in G.edges(data=True)]
pos = nx.spring_layout(G, scale=5)
nx.draw(G, pos, with_labels=True, width=np.power(edgewidth, 2), edge_color='grey', node_size=normstrengthlist*20000,
labels=Convert(lineList), font_color='black', node_color=colornumbs/10, cmap=plt.cm.Spectral, alpha=0.7, font_size=9)
#plt.savefig('network.jpeg')
# Let's visualise the Minimum Spanning Tree
plt.figure(figsize=(15,15))
nx.draw(GMST, with_labels=True, alpha=0.7, font_size=9)
# First let's just add some attributes so that it becomes more interesting
nx.set_node_attributes(st50G, dict(st50G.degree(weight='weight')), 'strength')
nx.set_node_attributes(st50G, Convert(lineList), 'area')
nx.set_node_attributes(st50G, Convert(colorlist), 'color')
nx.set_node_attributes(st50G, Convert(sublist), 'subnet')
#edgecolors = {(e1, e2): int((weight+1)**3) for e1, e2, weight in st50G.edges(data='weight')}
# Then add them as attributes to the graph
#nx.set_edge_attributes(st50G, edgecolors, 'edgecolor')
G_distance_dict2 = {(e1, e2): 1 / abs(weight) for e1, e2, weight in st50G.edges(data='weight')}
# Then add them as attributes to the graph
nx.set_edge_attributes(st50G, G_distance_dict2, 'distance')
st50GRL = nx.relabel_nodes(st50G, {i: lineList[i] for i in range(len(lineList))})
# CircosPlot
circ = CircosPlot(st50GRL, figsize=(30,30), node_labels=True, node_label_layout='rotation', node_order='subnet',
edge_color='weight', edge_width='weight', node_color='subnet', node_label_color=True, fontsize=10,
nodeprops={"radius": 2}, group_legend=True, group_label_offset=5)
circ.draw()
circ.sm.colorbar.remove()
labels_networks = sorted(list(set([list(circ.graph.nodes.values())[n][circ.node_color] for n in np.arange(len(circ.nodes))])))
plt.legend(handles=circ.legend_handles,
title="Subnetwork",
ncol=6,
borderpad=1,
shadow=True,
fancybox=True,
loc='best',
fontsize=10,
labels= labels_networks)
plt.show()
# How to get node positions according to https://stackoverflow.com/questions/43541376/how-to-draw-communities-with-networkx
def community_layout(g, partition):
    """
    Compute the layout for a modular graph.

    Arguments:
    ----------
    g -- networkx.Graph or networkx.DiGraph instance
        graph to plot

    partition -- dict mapping int node -> int community
        graph partitions

    Returns:
    --------
    pos -- dict mapping int node -> (float x, float y)
        node positions
    """
    pos_communities = _position_communities(g, partition, scale=3.)
    pos_nodes = _position_nodes(g, partition, scale=1.)
    # combine positions
    pos = dict()
    for node in g.nodes():
        pos[node] = pos_communities[node] + pos_nodes[node]
    return pos

def _position_communities(g, partition, **kwargs):
    # create a weighted graph, in which each node corresponds to a community,
    # and each edge weight to the number of edges between communities
    between_community_edges = _find_between_community_edges(g, partition)
    communities = set(partition.values())
    hypergraph = nx.DiGraph()
    hypergraph.add_nodes_from(communities)
    for (ci, cj), edges in between_community_edges.items():
        hypergraph.add_edge(ci, cj, weight=len(edges))
    # find layout for communities
    pos_communities = nx.spring_layout(hypergraph, **kwargs)
    # set node positions to position of community
    pos = dict()
    for node, community in partition.items():
        pos[node] = pos_communities[community]
    return pos

def _find_between_community_edges(g, partition):
    edges = dict()
    for (ni, nj) in g.edges():
        ci = partition[ni]
        cj = partition[nj]
        if ci != cj:
            try:
                edges[(ci, cj)] += [(ni, nj)]
            except KeyError:
                edges[(ci, cj)] = [(ni, nj)]
    return edges

def _position_nodes(g, partition, **kwargs):
    """
    Positions nodes within communities.
    """
    communities = dict()
    for node, community in partition.items():
        try:
            communities[community] += [node]
        except KeyError:
            communities[community] = [node]
    pos = dict()
    for ci, nodes in communities.items():
        subgraph = g.subgraph(nodes)
        pos_subgraph = nx.spring_layout(subgraph, **kwargs)
        pos.update(pos_subgraph)
    return pos
# Visualisation of Communities/Modularity - run the modularity and community_layout cells above first!
plt.figure(figsize=(25,25))
values = [part.get(node) for node in G.nodes()]
clust=[i*9000 for i in nx.clustering(G, weight='weight').values()]
nx.draw(G, pos=community_layout(G, part), font_size=8, node_size=clust, node_color=values, width=np.power([ d['weight'] for (u,v,d) in G.edges(data=True)],2),
with_labels=True, labels=Convert(lineList), font_color='black', edge_color='grey', cmap=plt.cm.Spectral, alpha=0.7)
Here, we will cover a few computations that are being applied in Neuroscience:
Persistent homology is a method for computing topological features of a space at different spatial resolutions. With it, we can track homology cycles across simplicial complexes, and determine whether there were homology classes that "persisted" for a long time (Reference [5]). The basic idea is summarized in the illustration below.
Figure 3. Topological data analysis. (A) Illustration of simplexes. (B) Representation of simplexes/cliques of different order being formed in the brain across the filtration process. (C) Barcode respective to panel B, representing the filtration across distances (i.e., the inverse of weights in a correlation matrix). Line A represents cycle A in B. H0-2 indicates the homology groups. (H0 = connected components, H1 = one-dimensional holes, H2 = 2-dimensional holes). (D) Circular projection of how the brain would be connected. (E) Persistence diagram (or Birth/Death plot) obtained from real rsfMRI brain data. In this plot, it is also possible to identify a phase transition between H1 and H2.
# Computation of persistence barcode (http://gudhi.gforge.inria.fr/python/latest/persistence_graphical_tools_user.html)
# Converting to distance matrix
mattop = 1 - matrix
# Computing and plotting barcode
rips_complex = gudhi.RipsComplex(distance_matrix=mattop, max_edge_length=1)
simplex_tree = rips_complex.create_simplex_tree(max_dimension=2)
diag = simplex_tree.persistence()
gudhi.plot_persistence_barcode(diag, legend=True, max_intervals=0)
<AxesSubplot:title={'center':'Persistence barcode'}>
# Persistence Diagram
gudhi.plot_persistence_diagram(diag, legend=True, max_intervals=0)
plt.tick_params(axis='both', labelsize=15)
plt.xlabel('Birth', fontsize=15)
plt.ylabel('Death', fontsize=15)
Text(0, 0.5, 'Death')
# Persistence density plots
gudhi.plot_persistence_density(diag, dimension=1)
<AxesSubplot:title={'center':'Persistence density'}, xlabel='Birth', ylabel='Death'>
One way of connecting the geometry of a continuous surface to its topology is by using local curvature and Euler characteristics. Here, we will compute the network curvature at each node to calculate topological phase transitions in brain networks from a local perspective (Reference [6]).
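The Curvature.py script itself is not reproduced here, but the Euler characteristic it builds on can be sketched: for a clique complex, χ is the alternating sum of clique counts, χ = n₀ − n₁ + n₂ − …, where n_k is the number of (k+1)-cliques. A minimal illustration (our own sketch, not the script's implementation):

```python
import networkx as nx

def euler_characteristic(G):
    # chi = (#nodes) - (#edges) + (#triangles) - (#4-cliques) + ...
    counts = {}
    for clique in nx.enumerate_all_cliques(G):
        counts[len(clique)] = counts.get(len(clique), 0) + 1
    return sum((-1) ** (k - 1) * n for k, n in counts.items())

# A filled triangle (3-clique): 3 - 3 + 1 = 1
chi_triangle = euler_characteristic(nx.complete_graph(3))
# A 4-cycle (a one-dimensional hole): 4 - 4 = 0
chi_cycle = euler_characteristic(nx.cycle_graph(4))
```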
%run "./Background Scripts/Curvature.py"
# Euler entropy (Sχ = ln|χ|) as a function of the correlation threshold level.
plotEuler(matrix,70,0)
# Obtaining the value of curvature for each node at a specific threshold and creating a dictionary with brain region names.
curvvalues = Curv(0.6, matrix)
dict(zip(lineList, curvvalues))
{'LLOCid1': 0.04047619047617901,
'LLOCsd1': -0.3333333333333335,
'RPC1': -0.3,
'LIC1': -0.3214285714285679,
'LPG1': -0.10000000000000156,
'RPC2': -0.16666666666666669,
'RAG1': -0.8333333333333334,
'LPG2': -5.828670879282072e-16,
'LMTGpd1': -0.16666666666666669,
'RFP1': 0.0,
'RPG1': -0.13333333333333303,
'RAG2': 0.16666666666666663,
'RP1': -0.16666666666666669,
'LLOCid2': -0.16666666666666669,
'RPG2': -0.5,
'RT1': 0.3333333333333333,
'LPC1': -0.13333333333333303,
'LIC2': -0.06666666666666776,
'LPG3': 0.3333333333333333,
'RCGad1': 0.03333333333333305,
'LFP1': -0.6666666666666667,
'LPOC1': -0.2357142857142922,
'RFP2': -0.6666666666666667,
'RLOCid1': -0.4380952380952337,
'LPGad1': 1.0,
'RPG3': 0.16666666666666682,
'RCGpd1': 0.0,
'RSPL1': -0.666666666666667,
'RTP1': 1.0,
'LSGpd1': 0.25,
'LPG4': -1.1666666666666667,
'RCOC1': -0.11904761904762862,
'LT1': 0.5,
'RMFG1': -1.4166666666666667,
'RMTGtp1': 1.0,
'RCGad2': -1.1666666666666667,
'LFP2': 1.0,
'ROP1': 0.10476190476190728,
'LT2': 0.3333333333333333,
'RCGpd2': -0.7166666666666666,
'LC1': -0.5,
'RH1': 0.5,
'RMTGpd1': 0.5,
'LPG5': 0.0,
'RFOC1': -0.5,
'LCC1': -0.470238095238142,
'RCGad3': -0.85,
'LAG1': -0.33333333333333337,
'LFOC1': 1.0,
'RPC3': 0.03333333333333305,
'RIC1': -0.06666666666666946,
'RPG4': -5.828670879282072e-16,
'LMFG1': -0.666666666666667,
'RTFCpd1': 0.3333333333333333,
'LTOFC1': 0.5,
'RMFG2': -0.5,
'RPG5': -0.16666666666666669,
'RPT1': -0.29523809523809175,
'LP1': -0.5833333333333335,
'RMTGtp2': 0.3333333333333333,
'LLG1': 0.06547619047616537,
'RFOC2': 0.5,
'LMTGpd2': -0.5,
'LPG6': 0.5,
'LFOC2': 0.5,
'RFP3': -0.5,
'LCGpd1': 0.3333333333333333,
'LTP1': 0.5,
'RCGad4': 0.03333333333333216,
'RPC4': -0.8333333333333334,
'LLOCsd2': -0.9166666666666665,
'RPP1': 0.11666666666666456,
'LC2': 0.08333333333333326,
'RLOCid2': -0.33333333333333337,
'RTFCpd2': 1.0,
'LPG7': -0.01666666666666694,
'RIC2': 0.01785714285711526,
'LPG8': 0.0,
'RSFG1': -0.06666666666666946,
'LA1': -0.16666666666666669,
'RSGpd1': -0.3333333333333335,
'RC1': 0.25,
'LMFG2': 0.0,
'LPG9': -0.33333333333333315,
'LLOCsd3': 0.16666666666666663,
'RPG6': 0.11666666666666653,
'LMTGtp1': 1.0,
'RITGtp1': -0.16666666666666669,
'LMTGad1': 0.5,
'ROP2': 0.05238095238090312,
'LFP3': -0.13333333333333455,
'RLG1': -0.14285714285715234,
'RMFG3': -0.75,
'RSTGpd1': -0.33333333333333337,
'LOP1': -0.01190476190475942,
'RFP4': -0.5166666666666665,
'RTP2': 0.5,
'RPG7': -0.49999999999999967,
'LFOC3': 0.5,
'RFP5': 0.3333333333333333,
'LLOCsd4': -0.08333333333333348,
'RPG8': 0.5,
'LSGad1': -0.5833333333333335,
'LMTGtp2': 0.16666666666666663,
'RCOC2': -0.19999999999999418,
'LIC3': 0.030952380952379732,
'LPGpd1': 0.3333333333333333,
'RPG9': -1.3333333333333346,
'RFP6': 1.0,
'LMFG3': 0.0,
'RMFG4': -0.25,
'RPOC1': -0.9499999999999976,
'LPP1': 0.16666666666666663,
'LOP2': 0.07619047619043795,
'RLOCsd1': 0.25,
'LSFG1': 0.033333333333330634,
'RPG10': 0.0,
'RC2': -0.08333333333333348,
'LLOCsd5': -0.5000000000000002,
'RPP2': 0.07857142857142979,
'ROFG': 0.021428571428526944,
'RFP7': -0.4833333333333355,
'RMTGad1': 1.0,
'LIFGpt1': 0.5,
'ROP3': 0.020238095238020115,
'RLOCid3': -0.33333333333333337,
'RIFGpt1': 0.0,
'LH1': -0.33333333333333337,
'LHG/HaH1': -0.15238095238095561,
'LPC2': -0.08333333333333348,
'RT2': -0.41666666666666674,
'LPG10': 0.5,
'LLOCid3': 0.3333333333333333,
'LMFG4': -1.0,
'RSTGpd2': -0.16666666666666669,
'LPG11': -2.220446049250313e-16,
'RH2': 0.3333333333333333,
'LSPL1': -0.75,
'RPG11': -0.21666666666666695,
'LOP3': 0.00952380952380788,
'LOFG1': 0.06190476190476629,
'LSC1': 0.5,
'RSFG2': -2.220446049250313e-16,
'RPC5': -0.33333333333333337,
'RIFGpo1': -0.5,
'LPG12': -1.0,
'RLOCsd2': -0.8333333333333335,
'LFP4': -1.0,
'RFP8': -0.5,
'LFP5': 0.3333333333333333,
'RLOCsd3': -1.516666666666667,
'LPG13': -0.41666666666666674,
'RTOFC1': -0.041666666666695384,
'LSFG2': 1.0,
'LPC3': -0.633333333333334,
'RLOCid4': 0.04523809523810057,
'LLG2': -0.0321428571429232,
'RP2': 0.3333333333333333,
'LLG3': -0.12499999999999889,
'RSGad1': -0.08333333333333348,
'RTP3': 0.5,
'LJLC/SMC1': 0.16666666666666652,
'LFP6': 0.5,
'LCOC1': -0.3738095238095149,
'RHG/HaH1': -0.08095238095238144,
'RFP9': -0.3000000000000002,
'RSFG3': 1.0,
'LSPL2': 0.5,
'LTOFC2': 0.0035714285713877025,
'LSFG3': -0.9166666666666667,
'RFP10': 0.11666666666666634,
'RLG2': 0.0928571428571418,
'LTP2': -0.9166666666666667,
'LPC4': -0.33333333333333337,
'RTFCad1': 1.0,
'B1': 1.0,
'LPT1': -0.04761904761904867}
# Histogram for curvature distribution at a specific threshold.
curv_values= Curv(0.6, matrix)
sns.distplot(curv_values, kde=False, norm_hist=False)
plt.xlabel('Curvature Values')
plt.ylabel('Counts')
Text(0, 0.5, 'Counts')
[1] Brown JA, Rudie JD, Bandrowski A, Van Horn JD, Bookheimer SY. The UCLA multimodal connectivity database: a web-based platform for brain connectivity matrix sharing and analysis. Front Neuroinform. 2012;6:28. doi: 10.3389/fninf.2012.00028.
[2] Biswal BB, Mennes M, Zuo XN, Gohel S, Kelly C, Smith SM, et al. Toward discovery science of human brain function. Proc Natl Acad Sci U S A. 2010;107(10):4734-9. doi: 10.1073/pnas.0911855107.
[3] Fornito A, Zalesky A, Bullmore E. Fundamentals of brain network analysis. 1st ed. San Diego: Academic Press; 2016.
[4] Hagberg A, Swart P, S Chult D, editors. Exploring network structure, dynamics, and function using NetworkX. Proceedings of the 7th Python in Science conference (SciPy 2008); 2008 Aug 19-24; Pasadena, USA.
[5] Bassett DS, Sporns O. Network neuroscience. Nat Neurosci. 2017;20(3):353. doi: 10.1038/nn.4502.
[6] Santos FAN, Raposo EP, Coutinho-Filho MD, Copelli M, Stam CJ, Douw L. Topological phase transitions in functional brain networks. Phys Rev E. 2019;100(3-1):032414. doi: 10.1103/PhysRevE.100.032414.
The 1000_Functional_Connectomes dataset was downloaded from the UCLA multimodal connectivity database.